Ziran Wang

A Universal and Robust Framework for Multiple Gas Recognition Based-on Spherical Normalization-Coupled Mahalanobis Algorithm

Jan 05, 2026

Digital Twin AI: Opportunities and Challenges from Large Language Models to World Models

Jan 04, 2026

SNM-Net: A Universal Framework for Robust Open-Set Gas Recognition via Spherical Normalization and Mahalanobis Distance

Dec 28, 2025

FSDAM: Few-Shot Driving Attention Modeling via Vision-Language Coupling

Nov 16, 2025

ViLaD: A Large Vision Language Diffusion Framework for End-to-End Autonomous Driving

Aug 18, 2025

A Hierarchical Test Platform for Vision Language Model (VLM)-Integrated Real-World Autonomous Driving

Jun 17, 2025

Inference Acceleration of Autoregressive Normalizing Flows by Selective Jacobi Decoding

May 30, 2025

ALN-P3: Unified Language Alignment for Perception, Prediction, and Planning in Autonomous Driving

May 21, 2025

Generative AI for Autonomous Driving: Frontiers and Opportunities

May 13, 2025

NuPlanQA: A Large-Scale Dataset and Benchmark for Multi-View Driving Scene Understanding in Multi-Modal Large Language Models

Mar 17, 2025